5 research outputs found

    Evaluating Engagement in Digital Narratives from Facial Data

    Get PDF
    Engagement research indicates that a person's level of engagement in a narrative influences their subsequent story-related attitudes and beliefs, which helps psychologists understand social behaviour and personal experience. With the arrival of multimedia, digital narratives combine multimedia features (e.g. varying images, music, and voiceover) with traditional storytelling. Digital narratives have been widely used to help students gain problem-solving and presentation skills, and to support child psychologists investigating children's social understanding, such as family and peer relationships, through the digital narratives the children complete. However, there has been little study of how the multimedia features of digital narratives affect people's engagement levels. This research focuses on measuring people's engagement in digital narratives and, specifically, on understanding the media effect of digital narratives on engagement levels. Measurement tools are developed and validated through analyses of facial data from different age groups (children and young adults) watching stories with different media features. Data sources include a questionnaire with a Smileyometer scale and observation of each participant's facial behaviours.
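
    As a rough illustration of the kind of validation this abstract describes, the sketch below correlates per-participant Smileyometer ratings with a simple facial-behaviour score, separately per age group. The CSV file and all column names (participant, age_group, smileyometer, smile_frames, total_frames) are hypothetical, not taken from the paper.

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical per-participant summary: Smileyometer rating (1-5) and
    # the fraction of video frames in which a smile was observed.
    df = pd.read_csv("participants.csv")
    df["smile_ratio"] = df["smile_frames"] / df["total_frames"]

    # Spearman correlation between self-reported engagement and facial
    # behaviour, computed separately for each age group.
    for group, sub in df.groupby("age_group"):
        rho, p = spearmanr(sub["smileyometer"], sub["smile_ratio"])
        print(f"{group}: rho={rho:.2f}, p={p:.3f}")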

    Toward a Machine Learning Framework for Understanding Affective Tutorial Interaction

    No full text
    Affect and cognition intertwine throughout human experience. Research into this interplay during learning has identified relevant cognitive-affective states, but recognizing them poses significant challenges. Among multiple promising approaches to affect recognition, analyzing facial expression may be particularly informative. Descriptive computational models of facial expression and affect, such as those enabled by machine learning, aid our understanding of tutorial interactions. Hidden Markov modeling, in particular, is useful for encoding patterns in sequential data. This paper presents a descriptive hidden Markov model built on facial expression data and tutorial dialogue from a task-oriented human-human tutoring corpus. The model reveals five frequently occurring patterns of affective tutorial interaction across text-based tutorial dialogue sessions. The results show that hidden Markov modeling holds potential for the semi-automated understanding of affective interaction, which may contribute to the development of affect-informed intelligent tutoring systems.
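
    A minimal sketch of the modeling approach named here: fitting a five-state hidden Markov model to integer-coded sequences of facial-expression and dialogue events. It assumes the hmmlearn library; the event coding and toy session data are invented for illustration, not the paper's corpus.

    import numpy as np
    from hmmlearn.hmm import CategoricalHMM

    # Each session is a sequence of integer-coded observations, e.g.
    # 0 = neutral face, 1 = brow lowerer, 2 = smile,
    # 3 = tutor message, 4 = student message.
    sessions = [
        [0, 1, 1, 3, 4, 2, 0],
        [0, 3, 1, 4, 4, 2],
    ]

    # hmmlearn expects one concatenated column vector plus per-sequence lengths.
    X = np.concatenate(sessions).reshape(-1, 1)
    lengths = [len(s) for s in sessions]

    # Five hidden states, mirroring the five interaction patterns in the paper.
    model = CategoricalHMM(n_components=5, n_iter=100, random_state=0)
    model.fit(X, lengths)

    # Decode the most likely hidden-state sequence for the first session.
    states = model.predict(np.array(sessions[0]).reshape(-1, 1))
    print(states)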

    Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model

    No full text
    A variety of studies have established that users with different personality profiles exhibit different patterns of behavior when interacting with a system. Although patterns of behavior have been successfully used to predict cognitive and affective outcomes of an interaction, little work has been done to identify how these patterns vary with the user's personality profile. In this paper, we model sequences of facial expressions, postural shifts, hand-to-face gestures, system interaction events, and textual dialogue messages of a user interacting with a human tutor in a computer-mediated tutorial session. We use these models to predict the user's learning gain, frustration, and engagement at the end of the session. In particular, we examine user behavior based on the Extraversion trait score from a Big Five Factor personality survey. The analysis reveals a variety of personality-specific behavior sequences that are significantly indicative of cognitive and affective outcomes. These results could inform the user experience design of future interactive systems.
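
    The paper's models are sequence-based; as a loose, simplified illustration of personality-split sequence analysis, the sketch below computes the rate of a candidate behavior bigram per user and checks whether that rate tracks learning gain separately for high- and low-Extraversion users. All records, event names, and the chosen bigram are invented for the example.

    from scipy.stats import spearmanr

    # Hypothetical per-user records: Extraversion score, coded event
    # sequence from the tutorial session, and normalized learning gain.
    users = [
        {"extraversion": 4.2, "events": ["smile", "msg", "gesture", "msg"], "gain": 0.6},
        {"extraversion": 2.1, "events": ["frown", "shift", "msg", "frown"], "gain": 0.1},
        # ... more users; a real analysis needs many per group ...
    ]

    def bigram_rate(events, bigram):
        """Fraction of adjacent event pairs matching the given bigram."""
        pairs = list(zip(events, events[1:]))
        return pairs.count(bigram) / max(len(pairs), 1)

    # Split users at the median Extraversion score, then test within each
    # group whether the bigram's rate correlates with learning gain.
    median = sorted(u["extraversion"] for u in users)[len(users) // 2]
    for label in ("high", "low"):
        group = [u for u in users
                 if (u["extraversion"] >= median) == (label == "high")]
        rates = [bigram_rate(u["events"], ("smile", "msg")) for u in group]
        gains = [u["gain"] for u in group]
        rho, p = spearmanr(rates, gains)
        print(label, rho, p)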

    Faces of Focus: A Study on the Facial Cues of Attentional States

    No full text
    Automatically detecting attentional states is a prerequisite for designing interventions to manage attention, knowledge workers' most critical resource. As a first step towards this goal, it is necessary to understand how different attentional states are made discernible through visible cues in knowledge workers. In this paper, we identify the facial cues that are important for detecting attentional states by evaluating a data set of 15 participants whom we tracked over a whole workday, including their challenge and engagement levels. Our evaluation shows that gaze, pitch, and the lips-part action unit are indicators of engaged work, while pitch, gaze movements, gaze angle, and the upper-lid-raiser action unit are indicators of challenging work. These findings reveal a significant relationship between facial cues and both the engagement and challenge levels experienced by our tracked participants. Our work contributes to the design of future studies that detect attentional states based on facial cues.
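
    A minimal sketch of using such cues as classifier features, assuming features exported in OpenFace 2.0's column naming (pose_Rx for head pitch, gaze_angle_x/y, AU25_r for lips part, AU05_r for upper lid raiser). The CSV file, the windowing, and the engaged/not-engaged labels are hypothetical, not the study's pipeline.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-window features from OpenFace, joined with
    # experience-sampling labels (1 = engaged work, 0 = not engaged).
    df = pd.read_csv("workday_windows.csv")

    # Cues reported in the paper: head pitch, gaze angle, and the
    # lips-part (AU25) / upper-lid-raiser (AU05) action unit intensities.
    features = ["pose_Rx", "gaze_angle_x", "gaze_angle_y", "AU25_r", "AU05_r"]
    X, y = df[features], df["engaged"]

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())  # mean accuracy, 5 folds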